
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning



Abstract

We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents. The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot, from which collision-free trajectories feasible for the RL agents can be identified. The same RL agents are used to control the robot under the direction of the planner, enabling long-range navigation. We use Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. These evaluations included both simulated environments and on-robot tests. Our results show improvement in navigation task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes trajectories up to 215 meters long under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 meters without violating the task constraints, in an environment 63 million times larger than the one used in training.
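The hierarchy described in the abstract can be sketched in outline: sample collision-free configurations, add a roadmap edge only where the short-range agent can reliably connect the two endpoints, then plan over the roadmap and hand each edge to the same agent for execution. The sketch below is a hypothetical, simplified 2D illustration of that structure, not the paper's implementation: `rl_agent_can_connect` is a stand-in for Monte Carlo rollouts of the learned point-to-point policy (here replaced by a straight-line clearance check with a limited range), and all names and parameters are invented for illustration.

```python
import heapq
import math
import random

def sample_free(n, width, height, obstacles, rng):
    """Sample n collision-free 2D configurations (obstacles are (center, radius))."""
    pts = []
    while len(pts) < n:
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(p, c) > r for c, r in obstacles):
            pts.append(p)
    return pts

def rl_agent_can_connect(a, b, obstacles, max_range=3.0, steps=20):
    """Stand-in for the RL agent's point-to-point connection test.
    In PRM-RL this would be rollouts of the learned policy; here we
    check a straight segment for clearance within the agent's range."""
    if math.dist(a, b) > max_range:
        return False
    for i in range(steps + 1):
        t = i / steps
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if any(math.dist(p, c) <= r for c, r in obstacles):
            return False
    return True

def build_prm(nodes, obstacles):
    """Connect node pairs only where the agent's connection test succeeds."""
    graph = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if rl_agent_can_connect(nodes[i], nodes[j], obstacles):
                d = math.dist(nodes[i], nodes[j])
                graph[i].append((j, d))
                graph[j].append((i, d))
    return graph

def shortest_path(graph, start, goal):
    """Dijkstra over the roadmap; at run time each returned edge would be
    executed by the same short-range agent that validated it."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

rng = random.Random(0)
obstacles = [((5.0, 5.0), 1.5)]
nodes = [(0.5, 0.5)] + sample_free(60, 10, 10, obstacles, rng) + [(9.5, 9.5)]
graph = build_prm(nodes, obstacles)
path = shortest_path(graph, 0, len(nodes) - 1)
```

The key design point carried over from the abstract is that edge feasibility is decided by the same agent that later executes the edge, so the roadmap never contains connections the controller cannot follow.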
